CoreWeave offers cloud-based Grace Blackwell GPUs for AI training

Cloud services provider CoreWeave has announced it is offering Nvidia’s GB200 NVL72 systems, otherwise known as “Grace Blackwell,” to customers running intensive AI training workloads.

CoreWeave said its portfolio of cloud services is optimized for the GB200 NVL72, including CoreWeave’s Kubernetes Service, Slurm on Kubernetes (SUNK), Mission Control, and other services. CoreWeave’s Blackwell instances scale up to 110,000 Blackwell GPUs connected over Nvidia Quantum-2 InfiniBand networking.

The GB200 NVL72 is a rack-scale system that links 36 Grace CPUs and 72 Blackwell GPUs so that they appear to software as a single, massive processor. It is designed for developing and training advanced large language models.
